In this paper, we introduce a high-quality, large-scale benchmark dataset for English-Vietnamese speech translation, comprising 508 hours of audio and 331k triplets (sentence-length audio, English source transcript sentence, Vietnamese target subtitle sentence). We also conduct empirical experiments with strong baselines and find that the traditional "cascaded" approach still outperforms modern "end-to-end" approaches. To the best of our knowledge, this is the first large-scale study of English-Vietnamese speech translation. We hope that both our public dataset and our study can serve as a starting point for future research on and applications of English-Vietnamese speech translation. Our dataset is available at https://github.com/vinairesearch/phost
translated by Google Translate
In this paper, we propose a novel technique, namely INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR reasons about program semantics via program invariants, while it also captures program syntax via language semantics learned from a large code corpus using a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. Then, INVALIDATOR determines that an APR-generated patch overfits if: (1) it violates correct specifications or (2) it maintains erroneous behaviors of the original buggy program. In case our approach fails to determine an overfitting patch based on invariants, INVALIDATOR utilizes a model trained on labeled patches to assess patch correctness based on program syntax. The benefit of INVALIDATOR is three-fold. First, INVALIDATOR is able to leverage both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require new test cases to be generated; instead, it relies only on the current test suite and uses invariant inference to generalize the behaviors of a program. Third, INVALIDATOR is fully automated. We have conducted our experiments on a dataset of 885 patches generated on real-world programs in Defects4J. Experimental results show that INVALIDATOR correctly classifies 79% of overfitting patches, detecting 23% more overfitting patches than the best baseline. INVALIDATOR also substantially outperforms the best baselines by 14% and 19% in terms of Accuracy and F-Measure, respectively.
In this work, we propose a new approach that combines data from multiple sensors for reliable obstacle avoidance. The sensors include two depth cameras and a LiDAR, arranged so that they capture the whole 3D area in front of the robot and a 2D slice around it. To fuse the data from these sensors, we first use an external camera as a reference to combine the data from the two depth cameras. A projection technique is then introduced to convert the cameras' 3D point cloud data to its 2D correspondence. An obstacle avoidance algorithm is then developed based on the dynamic window approach. A number of experiments have been conducted to evaluate our proposed approach. The results show that the robot can effectively avoid static and dynamic obstacles of different shapes and sizes in different environments.
This study proposes an approach for establishing an optimal multihop ad-hoc network using multiple unmanned aerial vehicles (UAVs) to provide emergency communication in disaster areas. The approach includes two stages, one uses particle swarm optimization (PSO) to find optimal positions to deploy UAVs, and the other uses a behavior-based controller to navigate the UAVs to their assigned positions without colliding with obstacles in an unknown environment. Several constraints related to the UAVs' sensing and communication ranges have been imposed to ensure the applicability of the proposed approach in real-world scenarios. A number of simulation experiments with data loaded from real environments have been conducted. The results show that our proposed approach is not only successful in establishing multihop ad-hoc routes but also meets the requirements for real-time deployment of UAVs.
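The first stage above uses particle swarm optimization to find UAV deployment positions. A minimal generic PSO sketch, not the authors' implementation (the objective, constants, and bounds here are illustrative assumptions; the paper's actual objective encodes coverage and connectivity constraints):

```python
import numpy as np

def pso(objective, dim=2, n_particles=30, iters=100, bounds=(-10.0, 10.0), seed=0):
    """Minimal particle swarm optimization: each particle tracks its personal
    best position, the swarm tracks a global best, and velocity updates blend
    inertia with attraction toward both bests."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()                              # personal best positions
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()      # global best position

    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()
```

For UAV placement, each particle would encode the stacked 2D/3D coordinates of all UAVs, with the objective penalizing broken multihop links and obstacles.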
Self-Supervised Learning (SSL) is crucial for real-world applications, especially in data-hungry domains such as healthcare and self-driving cars. In addition to a lack of labeled data, these applications also suffer from distributional shifts. Therefore, an SSL method should provide robust generalization and uncertainty estimation on the test dataset to be considered a reliable model in such high-stakes domains. However, existing approaches often focus on generalization without evaluating the model's uncertainty. The ability to compare SSL techniques for improving these estimates is therefore critical for research on the reliability of self-supervised models. In this paper, we explore variants of SSL methods, including Jigsaw Puzzles, Context, Rotation, and Geometric Transformations Prediction for vision, as well as BERT and GPT for language tasks. We train SSL as auxiliary learning for vision and as pre-training for language models, then evaluate generalization (in- and out-of-distribution classification accuracy) and uncertainty (expected calibration error) across different distributional covariate shift datasets, including MNIST-C, CIFAR-10-C, CIFAR-10.1, and MNLI. Our goal is to create a benchmark with outputs from our experiments, providing a starting point for new SSL methods in Reliable Machine Learning. All source code to reproduce results is available at https://github.com/hamanhbui/reliable_ssl_baselines.
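The uncertainty metric above, expected calibration error, can be computed from predicted class probabilities and true labels. A minimal sketch of the standard equal-width-bin formulation (bin count and binning scheme are common defaults, not necessarily the paper's exact setup):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: the population-weighted average gap between top-class
    confidence and accuracy over equal-width confidence bins."""
    probs = np.asarray(probs, dtype=float)   # shape (N, C): class probabilities
    labels = np.asarray(labels)              # shape (N,): true class indices
    confidences = probs.max(axis=1)          # top-class confidence per sample
    predictions = probs.argmax(axis=1)
    accuracies = (predictions == labels).astype(float)

    ece = 0.0
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(accuracies[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap       # weight gap by bin population
    return ece
```

A perfectly calibrated model (confidence equals accuracy in every bin) yields an ECE of zero.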
Semantic communication (SemCom) and edge computing are two disruptive solutions to the emerging requirements of massive data communication, bandwidth efficiency, and low-latency data processing in the Metaverse. However, edge computing resources are often provided by computing service providers, so it is essential to design appealing incentive mechanisms for the provision of limited resources. Deep learning (DL)-based auctions have recently been proposed as incentive mechanisms that maximize revenue while holding important economic properties, i.e., individual rationality and incentive compatibility. Therefore, in this work, we introduce the design of a DL-based auction for computing resource allocation in the SemCom-enabled Metaverse. First, we briefly introduce the fundamentals and challenges of the Metaverse. Second, we present the preliminaries of SemCom and edge computing. Third, we review various incentive mechanisms for edge computing resource trading. Fourth, we present the design of the DL-based auction for edge resource allocation in the SemCom-enabled Metaverse. Simulation results demonstrate that the DL-based auction improves revenue while nearly satisfying the individual rationality and incentive compatibility constraints.
The literature on fairness-aware machine learning contains many different notions of fairness. However, it is well known that it is impossible to satisfy all of them, as certain notions contradict one another. In this paper, we take a closer look at academic performance prediction (APP) systems and try to determine which notion of fairness best suits this task. To this end, we scanned the recent literature for guidelines on which fairness notion to use and applied these guidelines to APP. Our findings suggest that, based on APP's WYSIWYG worldview and the potential long-term benefits for the population, equalized odds is the notion best suited to APP.
Meta-learning from learning curves is an important yet often neglected research area in the machine learning community. We introduce a series of learning-curve-based meta-learning challenges, in which an agent searches for the algorithm best suited to a given dataset based on feedback from learning curves produced by the environment. The first round attracted participants from both academia and industry. This paper analyzes the results of the first round (accepted into the competition program of WCCI 2022) to understand what makes a meta-learner successful at learning from learning curves. With the lessons learned from the first round and the participants' feedback, we designed the second round of the challenge with a new protocol and a new meta-dataset design. The second round of our challenge was accepted at AutoML-Conf 2022 and is currently ongoing.
Accurate breast cancer diagnosis through mammography has the potential to save millions of lives worldwide. Deep learning (DL) methods have proven highly effective for mass detection in mammograms, and further improvements to current DL models will further increase the effectiveness of these methods. In this context, a key problem is how to select the right hyperparameters for DL models. In this paper, we propose GA-E2E, a new approach that uses genetic algorithms (GAs) to tune the hyperparameters of DL models for breast cancer detection. Our findings show that differences in hyperparameter values can considerably change the area under the curve (AUC), which is used to determine classifier performance.
This paper describes weighting the predicted values by RRMSE (relative root mean square error) before averaging them in ensemble voting regression. The core idea behind ensemble regression is to combine several base regression models to improve predictive performance on learning problems with a numeric continuous target variable. The default weight setting for ensemble voting regression is uniform weights, and without domain knowledge of the learning task it is impossible to assign meaningful weights to the predictions, which makes it hard to improve upon them. This work attempts to improve the predictions of voting regression by implementing an RRMSE-based weighting function. Experiments show that the RRMSE voting regressor predicts better than other state-of-the-art ensemble regression algorithms on six popular regression learning datasets.
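The weighting scheme above can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the RRMSE definition used here (RMSE normalized by the mean of the targets) and the inverse-error weight normalization are common choices assumed for the sketch.

```python
import numpy as np

def rrmse(y_true, y_pred):
    # One common definition: RMSE normalized by the mean of the targets.
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / np.mean(y_true)

def rrmse_weights(y_true, base_preds):
    """Weight each base regressor proportionally to the inverse of its
    RRMSE on held-out data, so more accurate models vote more strongly."""
    errors = np.array([rrmse(y_true, p) for p in base_preds])
    inv = 1.0 / errors
    return inv / inv.sum()       # weights sum to 1

def weighted_vote(base_preds, weights):
    # Weighted average of the base models' predictions.
    return np.average(np.stack(base_preds), axis=0, weights=weights)
```

With uniform weights this reduces to plain voting regression; the RRMSE weights shift the average toward the base models with smaller relative error.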